    Real-time interactive visualization of large networks on a tiled display system

    This paper introduces a methodology for visualizing large real-world (social) network data on a high-resolution tiled display system. Advances in network drawing algorithms have enabled real-time visualization and interactive exploration of large real-world networks. However, visualization on a typical desktop monitor remains challenging due to the limited amount of screen space and the ever increasing size of real-world datasets. To address this problem, we propose an integrated approach that employs state-of-the-art network visualization algorithms on a tiled display system consisting of multiple screens. Key to our approach is using the machine's graphics processing units (GPUs) to their fullest extent in order to ensure an interactive setting with real-time visualization. To realize this, we extended a recent GPU-based implementation of a force-directed graph layout algorithm to multiple GPUs and combined it with a distributed rendering approach in which each graphics card in the tiled display system renders precisely the part of the network to be displayed on the monitors attached to it. Our evaluation of the approach on a 12-screen, 25-megapixel tiled display system with three GPUs demonstrates interactive performance at 60 frames per second for real-world networks with tens of thousands of nodes and edges. This constitutes a performance improvement of approximately 4 times over a single-GPU implementation. All the software developed to implement our tiled visualization approach, including the multi-GPU network layout, rendering, display, and interaction components, is made available as open-source software.
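
    The distributed-rendering idea can be made concrete with a small viewport-partitioning sketch: each GPU only draws the nodes whose layout coordinates fall inside the world-space rectangle covered by its own monitors. The Python sketch below is a simplified illustration under that assumption; the function names (tile_viewport, nodes_in_view) and the 4x3 wall geometry are placeholders, not code from the paper.

        import numpy as np

        def tile_viewport(world, cols, rows, col, row):
            # world = (xmin, ymin, xmax, ymax) of the full network layout;
            # return the sub-rectangle shown by the monitor at grid cell (col, row).
            xmin, ymin, xmax, ymax = world
            w = (xmax - xmin) / cols
            h = (ymax - ymin) / rows
            return (xmin + col * w, ymin + row * h,
                    xmin + (col + 1) * w, ymin + (row + 1) * h)

        def nodes_in_view(positions, view):
            # Boolean mask of nodes whose layout position lies inside one tile.
            xmin, ymin, xmax, ymax = view
            x, y = positions[:, 0], positions[:, 1]
            return (x >= xmin) & (x < xmax) & (y >= ymin) & (y < ymax)

        # Toy example: 10,000 random node positions on a 4x3 (12-screen) wall.
        positions = np.random.uniform(-100, 100, size=(10_000, 2))
        world = (-100, -100, 100, 100)
        view = tile_viewport(world, cols=4, rows=3, col=0, row=0)
        print(nodes_in_view(positions, view).sum(), "nodes drawn by this tile")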

    Astrophysical Axion Bounds

    Axion emission by hot and dense plasmas is a new energy-loss channel for stars. Observational consequences include a modification of the solar sound-speed profile, an increase of the solar neutrino flux, a reduction of the helium-burning lifetime of globular-cluster stars, accelerated white-dwarf cooling, and a reduction of the supernova SN 1987A neutrino burst duration. We review and update these arguments and summarize the resulting axion constraints.
    Comment: Contribution to Axion volume of Lecture Notes in Physics, 20 pages, 3 figures.

    Spin fluctuations in nearly magnetic metals from ab-initio dynamical spin susceptibility calculations: application to Pd and Cr95V5

    We describe our theoretical formalism and computational scheme for making ab-initio calculations of the dynamic paramagnetic spin susceptibilities of metals and alloys at finite temperatures. Its basis is Time-Dependent Density Functional Theory within an electronic multiple-scattering, imaginary-time Green function formalism. The results receive a natural interpretation in terms of overdamped oscillator systems, making them suitable for incorporation into spin fluctuation theories. For illustration we apply our method to the nearly ferromagnetic metal Pd and the nearly antiferromagnetic chromium alloy Cr95V5. We compare and contrast the spin dynamics of these two metals and in each case identify those fluctuations with relaxation times much longer than typical electronic 'hopping times'.
    Comment: 21 pages, 9 figures. To appear in Physical Review B (July 2000).
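
    The 'overdamped oscillator' interpretation mentioned above is usually expressed through a relaxational form of the dynamic susceptibility. A standard form from spin-fluctuation theory, shown here for orientation rather than as the paper's exact parameterization, is

        \chi(\mathbf{q},\omega) \approx \frac{\chi(\mathbf{q})}{1 - i\omega/\Gamma_{\mathbf{q}}},
        \qquad
        \operatorname{Im}\chi(\mathbf{q},\omega) \approx
        \chi(\mathbf{q})\,\frac{\omega\,\Gamma_{\mathbf{q}}}{\omega^{2}+\Gamma_{\mathbf{q}}^{2}},

    where \Gamma_{\mathbf{q}} is the relaxation rate of the fluctuation at wavevector q; the slow modes singled out in the abstract are those with 1/\Gamma_{\mathbf{q}} much longer than typical electronic hopping times.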

    Search for Gravitational Waves Associated with Gamma-Ray Bursts Detected by Fermi and Swift during the LIGO-Virgo Run O3b

    We search for gravitational-wave signals associated with gamma-ray bursts (GRBs) detected by the Fermi and Swift satellites during the second half of the third observing run of Advanced LIGO and Advanced Virgo (2019 November 1 15:00 UTC to 2020 March 27 17:00 UTC). We conduct two independent searches: a generic gravitational-wave transient search that analyzes 86 GRBs, and an analysis targeting binary mergers with at least one neutron star as short-GRB progenitors for 17 events. We find no significant evidence for gravitational-wave signals associated with any of these GRBs. A weighted binomial test of the combined results finds no evidence for subthreshold gravitational-wave signals associated with this GRB ensemble either. We use several source types and signal morphologies during the searches, resulting in lower bounds on the estimated distance to each GRB. Finally, we constrain the population of low-luminosity short GRBs using results from the first to the third observing runs of Advanced LIGO and Advanced Virgo. The resulting population is in accordance with the local binary neutron star merger rate.
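
    The ensemble statistic mentioned above asks how likely it is, in the absence of any signal, to see as many small per-GRB p-values as were actually observed. A much simplified, unweighted sketch of that idea (the published analyses use a weighted variant and detection statistics not reproduced here):

        import random
        from scipy.stats import binom

        def binomial_excess_test(p_values, threshold=0.05):
            # Probability, under the null, of seeing at least the observed number
            # of p-values below `threshold` among n independent trials.
            n = len(p_values)
            k = sum(p < threshold for p in p_values)
            # Survival function at k - 1 gives P(X >= k) for X ~ Binomial(n, threshold).
            return binom.sf(k - 1, n, threshold)

        # Toy example: 86 background-like (uniformly distributed) p-values.
        random.seed(0)
        p_values = [random.random() for _ in range(86)]
        print(f"ensemble p-value: {binomial_excess_test(p_values):.3f}")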

    Narrowband Searches for Continuous and Long-duration Transient Gravitational Waves from Known Pulsars in the LIGO-Virgo Third Observing Run

    Isolated neutron stars that are asymmetric with respect to their spin axis are possible sources of detectable continuous gravitational waves. This paper presents a fully coherent search for such signals from eighteen pulsars in data from LIGO and Virgo's third observing run (O3). For known pulsars, efficient and sensitive matched-filter searches can be carried out if one assumes that the gravitational radiation is phase-locked to the electromagnetic emission. In the search presented here, we relax this assumption and allow both the frequency and the time derivative of the frequency of the gravitational waves to vary in a small range around the values inferred from electromagnetic observations. We find no evidence for continuous gravitational waves and set upper limits on the strain amplitude for each target. These limits are more constraining for seven of the targets than the spin-down limit defined by ascribing all rotational energy loss to gravitational radiation. In an additional search, we look in O3 data for long-duration (hours to months) transient gravitational waves in the aftermath of pulsar glitches for six targets with a total of nine glitches. We report two marginal outliers from this search but find no clear evidence for such emission either. The resulting duration-dependent strain upper limits do not surpass indirect energy constraints for any of these targets.
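
    To make 'allowing the frequency and its time derivative to vary in a small range' concrete, the sketch below builds a simple grid of (frequency, spin-down) templates around values inferred from electromagnetic timing. The band fraction and grid spacings are illustrative placeholders, not the settings used in the paper.

        import numpy as np

        def narrowband_grid(f_em, fdot_em, band_frac=1e-3, n_f=201, n_fdot=21):
            # Grid of (frequency, spin-down) templates centred on the
            # electromagnetically inferred values, covering a fractional
            # band `band_frac` in each parameter.
            f_axis = np.linspace(f_em * (1 - band_frac), f_em * (1 + band_frac), n_f)
            fdot_axis = np.linspace(fdot_em * (1 - band_frac),
                                    fdot_em * (1 + band_frac), n_fdot)
            return np.array([(f, fd) for f in f_axis for fd in fdot_axis])

        # Example: twice the rotation frequency and spin-down of a Crab-like pulsar.
        templates = narrowband_grid(f_em=2 * 29.6, fdot_em=2 * -3.7e-10)
        print(templates.shape)  # (201 * 21, 2) candidate (f, fdot) pairs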

    Open data from the third observing run of LIGO, Virgo, KAGRA, and GEO

    The global network of gravitational-wave observatories now includes five detectors, namely LIGO Hanford, LIGO Livingston, Virgo, KAGRA, and GEO 600. These detectors collected data during their third observing run, O3, composed of three phases: O3a, starting in 2019 April and lasting six months; O3b, starting in 2019 November and lasting five months; and O3GK, starting in 2020 April and lasting two weeks. In this paper we describe these data and various other science products that can be freely accessed through the Gravitational Wave Open Science Center at https://gwosc.org. The main data set, consisting of the gravitational-wave strain time series that contain the astrophysical signals, is released together with supporting data useful for their analysis, as well as documentation, tutorials, and analysis software packages.
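
    As a pointer to the kind of access this open-data release enables, the snippet below fetches a short stretch of public strain data. It is a minimal sketch that assumes the third-party gwosc and gwpy Python packages are installed; the tutorials hosted at https://gwosc.org cover this in detail.

        from gwosc.datasets import event_gps
        from gwpy.timeseries import TimeSeries

        # GPS time of a published O3 event, looked up from the GWOSC catalogs.
        gps = event_gps("GW190425")

        # Fetch 64 seconds of LIGO Livingston open strain data around that time.
        strain = TimeSeries.fetch_open_data("L1", gps - 32, gps + 32)
        print(strain.sample_rate, strain.duration)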

    Constraints on the cosmic expansion history from GWTC–3

    We use 47 gravitational-wave sources from the third LIGO–Virgo–KAGRA Gravitational-Wave Transient Catalog (GWTC–3) to estimate the Hubble parameter H(z), including its current value, the Hubble constant H0. Each gravitational-wave (GW) signal provides the luminosity distance to the source, and we estimate the corresponding redshift using two methods: the redshifted masses and a galaxy catalog. Using the binary black hole (BBH) redshifted masses, we simultaneously infer the source mass distribution and H(z). The source mass distribution displays a peak around 34 M⊙, followed by a drop-off. Assuming this mass scale does not evolve with redshift results in an H(z) measurement, yielding H0 = 68^{+12}_{-8} km s^-1 Mpc^-1 (68% credible interval) when combined with the H0 measurement from GW170817 and its electromagnetic counterpart. This represents an improvement of 17% with respect to the H0 estimate from GWTC–1. The second method associates each GW event with its probable host galaxy in the catalog GLADE+, statistically marginalizing over the redshifts of each event's potential hosts. Assuming a fixed BBH population, we estimate a value of H0 = 68^{+8}_{-6} km s^-1 Mpc^-1 with the galaxy-catalog method, an improvement of 42% with respect to our GWTC–1 result and 20% with respect to recent H0 studies using GWTC–2 events. However, we show that this result is strongly impacted by assumptions about the BBH source mass distribution; the only event that is not strongly impacted by such assumptions (and is thus informative about H0) is the well-localized event GW190814.
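
    For intuition about the measurement principle (not the paper's hierarchical analysis): at low redshift the Hubble constant is roughly the recession velocity divided by the luminosity distance, so a GW distance combined with a redshift from either method above constrains H0. A toy sketch with placeholder numbers:

        # Toy illustration of the low-redshift relation H0 ~ c * z / d_L.
        # The numbers below are placeholders, not measurements from the paper.
        C_KM_S = 299_792.458  # speed of light in km/s

        def h0_low_z(redshift, luminosity_distance_mpc):
            # Leading-order Hubble constant estimate, valid only for z << 1.
            return C_KM_S * redshift / luminosity_distance_mpc

        # e.g. a GW170817-like event: z ~ 0.01 at roughly 43 Mpc.
        print(f"H0 ~ {h0_low_z(0.01, 43.0):.0f} km/s/Mpc")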

    Exploiting GPUs for Fast Force-Directed Visualization of Large-Scale Networks

    Network analysis software relies on graph layout algorithms to enable users to visually explore network data. Nowadays, networks easily consist of millions of nodes and edges, resulting in hours of computation time to obtain a readable graph layout on a typical workstation. Although these machines usually do not have a very large number of CPU cores, they can easily be equipped with Graphics Processing Units (GPUs), opening up the possibility of exploiting hundreds or even thousands of cores to counter the aforementioned computational challenges. In this paper we introduce a novel GPU framework for visualizing large real-world network data. The main focus is on a GPU implementation of force-directed graph layout algorithms, which are known to create high-quality network visualizations. The proposed framework is used to parallelize the well-known ForceAtlas2 algorithm, which is widely used in many popular network analysis packages and toolkits. The different procedures and data structures of the algorithm are adjusted to the specifics of the CUDA GPU architecture in terms of memory coalescing, shared memory usage, and thread workload balance. To evaluate its performance, the GPU implementation is tested on a diverse set of 38 large-scale real-world networks. This allows for a thorough characterization of the parallelizable components of force-directed layout algorithms in general and of the proposed GPU framework as a whole. Experiments demonstrate how the approach can efficiently process very large real-world networks, showing overall speedup factors between 40x and 123x compared to existing CPU implementations. In practice, this means that a network with 4 million nodes and 120 million edges can be visualized in 14 minutes rather than 9 hours.
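
    For readers unfamiliar with force-directed layout, the sketch below shows the core of one layout iteration in plain NumPy: pairwise repulsion between all nodes plus spring-like attraction along edges. It is a simplified serial illustration of the kind of computation the paper maps onto the GPU, not the ForceAtlas2 implementation itself.

        import numpy as np

        def layout_step(pos, edges, repulsion=1.0, attraction=0.01, step=0.1):
            # One naive O(n^2) force-directed iteration on 2-D node positions.
            diff = pos[:, None, :] - pos[None, :, :]      # pairwise displacements
            dist2 = (diff ** 2).sum(-1) + 1e-9            # squared distances
            np.fill_diagonal(dist2, np.inf)               # no self-repulsion
            forces = repulsion * (diff / dist2[..., None]).sum(axis=1)

            src, dst = edges[:, 0], edges[:, 1]           # attraction along edges
            pull = attraction * (pos[dst] - pos[src])
            np.add.at(forces, src, pull)
            np.add.at(forces, dst, -pull)

            return pos + step * forces

        # Toy graph: 100 nodes, 300 random edges, 50 layout iterations.
        rng = np.random.default_rng(0)
        pos = rng.uniform(-1, 1, size=(100, 2))
        edges = rng.integers(0, 100, size=(300, 2))
        for _ in range(50):
            pos = layout_step(pos, edges)
        print(pos.min(axis=0), pos.max(axis=0))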